
    Enhancing Graph Representation of the Environment through Local and Cloud Computation

    Enriching a robot's representation of its operational environment is a challenging task that aims at bridging the gap between low-level sensor readings and high-level semantic understanding. A rich representation often requires computationally demanding architectures, and purely point-cloud-based detection systems struggle with the everyday objects a robot has to handle. To overcome these issues, we propose a graph-based representation that addresses this gap by providing a semantic representation of robot environments built from multiple sources. To acquire information from the environment, the framework combines classical computer vision tools with modern cloud-based computer vision services, ensuring computational feasibility on onboard hardware. By incorporating an ontology hierarchy with over 800 object classes, the framework achieves cross-domain adaptability, eliminating the need for environment-specific tools. The proposed approach also allows us to handle small objects and integrate them into the semantic representation of the environment. The approach is implemented in the Robot Operating System (ROS), using the RViz visualizer for environment representation. This work is a first step towards a general-purpose framework that facilitates intuitive interaction and navigation across different domains. Comment: 5 pages, 4 figures
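    The core idea of fusing a fast local detector with a richer cloud service into one ontology-linked scene graph can be sketched as follows. This is an illustrative sketch, not the authors' code; the detection dictionaries and the toy class hierarchy are assumptions.

```python
# Sketch (assumed, not the paper's implementation): merge detections from a
# local detector and a cloud vision service into one semantic scene graph,
# linking each object to its ontology ancestor for cross-domain queries.

def build_scene_graph(local_dets, cloud_dets, ontology_parent):
    """Merge detections into {node: attrs} plus (child, parent) edges."""
    graph = {"nodes": {}, "edges": []}
    for det in local_dets + cloud_dets:
        label = det["label"]
        graph["nodes"][label] = {"source": det["source"], "bbox": det["bbox"]}
        parent = ontology_parent.get(label)
        if parent:
            graph["edges"].append((label, parent))
    return graph

# Toy inputs: local vision finds large furniture; the cloud service
# recognises a small handheld object a point-cloud pipeline would miss.
local = [{"label": "table", "bbox": (0, 0, 3, 2), "source": "local"}]
cloud = [{"label": "mug", "bbox": (1, 1, 1.2, 1.2), "source": "cloud"}]
hierarchy = {"mug": "container", "table": "furniture"}

g = build_scene_graph(local, cloud, hierarchy)
print(sorted(g["edges"]))  # [('mug', 'container'), ('table', 'furniture')]
```

    A real system would key nodes by object instance rather than label, but the merge-then-link structure is the same.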

    Skin Lesion Area Segmentation Using Attention Squeeze U-Net for Embedded Devices

    Melanoma is the deadliest form of skin cancer, and early diagnosis of malignant lesions is crucial for reducing mortality. Deep learning techniques applied to dermoscopic images can help track changes in the appearance of a lesion over time, an important factor in detecting malignant lesions. In this paper, we present Attention Squeeze U-Net, a deep learning architecture for skin lesion area segmentation specifically designed for embedded devices. The main goal is to increase patient empowerment through deep learning algorithms that can run locally on smartphones or low-cost embedded devices. This can be the basis to (1) create a history of the lesion, (2) reduce patient visits to the hospital, and (3) protect the privacy of the users. Quantitative results on publicly available data demonstrate that good segmentation results can be achieved even with a compact model.
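    Segmentation quality in this setting is typically measured by region overlap. As a minimal sketch (the paper's exact evaluation protocol is not reproduced here), the Dice coefficient compares a predicted binary lesion mask with the ground truth:

```python
import numpy as np

# Dice = 2|P ∩ T| / (|P| + |T|) for binary masks; eps guards against
# division by zero when both masks are empty.

def dice_score(pred, target, eps=1e-7):
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])   # predicted lesion mask (toy)
truth = np.array([[1, 0, 0], [0, 1, 0]])  # ground-truth mask (toy)
print(round(dice_score(pred, truth), 3))  # 0.8
```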

    Perspective Chapter: European Robotics League – Benchmarking through Smart City Robot Competitions

    The SciRoc project, started in 2018, is an EU-H2020 funded project supporting the European Robotics League (ERL) and builds on the success of the EU-FP7/H2020 projects RoCKIn, euRathlon, EuRoC and ROCKEU2. The ERL is a framework for robot competitions currently consisting of three challenges: ERL Consumer, ERL Professional and ERL Emergency. These three challenge scenarios are set up in urban environments and converge every two years under one major tournament: the ERL Smart Cities Challenge. Smart cities are a new urban innovation paradigm promoting the use of advanced technologies to improve citizens’ quality of life. A key novelty of the SciRoc project is the ERL Smart Cities Challenge, which aims to show how robots will integrate into the cities of the future as physical agents. The SciRoc project ran two such ERL Smart Cities Challenges, the first in Milton Keynes, UK (2019) and the second in Bologna, Italy (2021). In this chapter, we evaluate the three challenges of the ERL, explain why the SciRoc project introduced a fourth challenge to bring robot benchmarking to smart cities, and outline the process of conducting a Smart City event under the ERL umbrella. These innovations may pave the way for easier robotic benchmarking in the future.

    Preserving HRI Capabilities: Physical, Remote and Simulated Modalities in the SciRoc 2021 Competition

    In recent years, robots have been moving out of research laboratories and into everyday life. Competitions that benchmark the capabilities of a robot in everyday scenarios are a useful step forward along this path: they foster the development of robust architectures capable of handling issues that might occur during human-robot coexistence in human-shaped scenarios. One such competition is SciRoc, which in its second edition proposed new benchmarking environments. In particular, Episode 1 of SciRoc 2 offered three different modalities of participation while preserving Human-Robot Interaction (HRI), a fundamental functionality to benchmark. The Coffee Shop environment used to challenge the participating teams was an excellent testbed for benchmarking different robotic functionalities, as well as an exceptional opportunity to propose novel solutions that guarantee real human-robot interaction procedures despite the Covid-19 pandemic restrictions. The developed software is publicly released.

    Game Strategies for Physical Robot Soccer Players: A Survey

    Effective team strategies and joint decision-making processes are fundamental in modern robotic applications, where multiple units have to cooperate to achieve a common goal. The research communities in artificial intelligence and robotics have launched robotic competitions to promote research and validate new approaches, providing robust benchmarks for evaluating all the components of a multiagent system, ranging from hardware to high-level strategy learning. Among these competitions, RoboCup has a prominent role: running since the late 1990s, it is one of the first worldwide multirobot competitions, challenging researchers to develop robotic systems able to compete in the game of soccer. Robotic soccer teams are complex multirobot systems in which each unit shows individual skills and contributes to solid teamwork by exchanging information about its local perceptions and intentions. In this survey, we dive into the techniques developed within the RoboCup framework, analyzing and commenting on them in detail. We highlight significant trends in the research conducted in the field and provide commentary and insights on the challenges and achievements in generating decision-making processes for multirobot adversarial scenarios. As an outcome, we provide an overview of a body of work that lies at the intersection of three disciplines: artificial intelligence, robotics, and games.

    Questioning Items’ Link in Users’ Perception of a Training Robot for Elders

    Socially assistive robots are becoming more common in modern society. These robots can accomplish a variety of tasks for people who are exposed to isolation and difficulties; among them, elderly people form the largest group, and with them robotics can play new roles. Elderly people are also those who usually face the widest technological gap, so it is worth evaluating their perception when dealing with robots. To this end, the present work addresses the interaction of elderly people with a humanoid robot during a training session. The analysis has been carried out by means of a questionnaire built around four key factors: Motivation, Usability, Likability, and Sociability. The results can contribute to the design and development of social interaction between robots and humans in training contexts, enhancing the effectiveness of human-robot interaction.

    Learning from the Crowd: Improving the Decision Making Process in Robot Soccer using the Audience Noise

    Fan input and support are an important component in many individual and team sports, ranging from athletics to basketball, and audience reactions have a consistent impact on athletes’ performance. Analyzing crowd noise can provide a global indication of the ongoing game situation, one less conditioned by the subjective factors that can influence a single fan. In this work, we exploit the collective intelligence of the audience of a robot soccer match to improve the performance of the robot players. In particular, audio features extracted from the crowd noise are used in a Reinforcement Learning process to possibly modify the game strategy. The effectiveness of the proposed approach is demonstrated by experiments on recorded crowd noise samples from several past RoboCup SPL matches.
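    The loop of turning crowd noise into a learning signal can be sketched as below. This is a simplified illustration, not the paper's pipeline: the feature (frame RMS energy), the quiet baseline, and the two strategies are all assumptions.

```python
import math

# Sketch (assumed): crowd loudness above a quiet baseline becomes a scalar
# reward that nudges a tabular value estimate for the strategy in use.

def rms_energy(samples):
    """Root-mean-square energy of an audio frame."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def update_value(q, strategy, samples, baseline=0.1, alpha=0.5):
    """TD-style update of the value of the current strategy."""
    reward = rms_energy(samples) - baseline
    q[strategy] += alpha * (reward - q[strategy])
    return q[strategy]

q = {"attack": 0.0, "defend": 0.0}
loud_cheer = [0.6, -0.7, 0.65, -0.6]  # crowd reacts to an attacking play
update_value(q, "attack", loud_cheer)
print(q["attack"] > q["defend"])  # True
```

    The real system would use richer audio features than raw energy, but the mapping from crowd reaction to reward is the key idea.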

    A Deep Learning Approach for Object Recognition with NAO Soccer Robots

    The use of identical robots in the RoboCup Standard Platform League (SPL) made software development the key to achieving good results in competitions. In particular, the visual detection process is crucial for extracting information about the environment. In this paper, we present a novel approach for object detection and classification based on Convolutional Neural Networks (CNN). The approach is designed to be used by NAO robots and consists of two stages: image region segmentation, for reducing the search space, and Deep Learning, for validation. The proposed method can be easily extended to deal with different objects and adapted for use in other RoboCup leagues. Quantitative experiments have been conducted on a data set of annotated images captured in real conditions from NAO robots in action. The data set is made available to the community. © 2017, Springer International Publishing AG
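    The two-stage structure can be sketched as follows: a cheap segmentation step proposes candidate regions, and only those crops reach the (here stubbed) CNN validator. The brightness threshold and the stand-in classifier are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

# Stage 1 (assumed sketch): propose a bounding box around bright pixels,
# shrinking the search space before any expensive classification runs.

def propose_regions(image, thresh=0.5):
    ys, xs = np.where(image > thresh)
    if len(ys) == 0:
        return []
    return [(ys.min(), xs.min(), ys.max(), xs.max())]

# Stage 2: run the classifier only on the proposed crop.

def validate(image, box, classifier):
    y0, x0, y1, x1 = box
    return classifier(image[y0:y1 + 1, x0:x1 + 1])

img = np.zeros((8, 8))
img[2:4, 3:6] = 0.9                        # a bright "ball-like" blob
stub_cnn = lambda crop: crop.mean() > 0.5  # stand-in for the CNN stage

boxes = propose_regions(img)
print(boxes, [validate(img, b, stub_cnn) for b in boxes])
```

    In the real system the second stage is a trained CNN; the point of the sketch is that it only ever sees the proposed regions, not the full frame.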